Section: Research Program

High Performance methods for solving wave equations

Seismic imaging of realistic 3D complex elastodynamic media requires not only advanced mathematical methods but also High Performance Computing (HPC) technologies, from both a software and a hardware point of view. In the framework of our collaboration with Total, we are optimizing our algorithms, which are based on Discontinuous Galerkin methods, in the following directions.

  • Minimizing the communications between processors. One of the main advantages of Discontinuous Galerkin methods is that most of the computations can be performed locally on each element of the mesh. Communications arise only from the computation of the numerical fluxes on the faces of the elements, so data is exchanged only between elements sharing a common face. This represents a considerable gain compared with Continuous Finite Element methods, where communications have to take place between all elements sharing a common degree of freedom. However, the communications can still be reduced by judiciously choosing the quantities to be passed from one element to another (a minimal sketch of this neighbour-only exchange is given after this list).

  • Hybrid MPI and OpenMP parallel programming. Since communications are one of the main bottlenecks in the implementation of Discontinuous Galerkin methods in an HPC framework, it is necessary to avoid them between two processors sharing the same RAM. To this aim, the mesh is partitioned at the chip level rather than at the core level: the parallelization between two cores of the same chip is done using OpenMP, while the parallelization between cores of two different chips is done using MPI (see the sketch after this list).

  • Porting the code to new architectures. We are now planning to port the code to the new Intel Many Integrated Core architecture (Intel MIC). The optimization of this code began in 2013, in collaboration with Didier Rémy from SGI.

  • Using runtime systems. One of the main issues in optimizing parallel codes is portability between different architectures. Indeed, many optimizations performed for a specific architecture are useless on another one; in some cases, they may even degrade performance. Task programming libraries such as StarPU (http://runtime.bordeaux.inria.fr/StarPU/) or DAGuE (http://icl.cs.utk.edu/dague/index.html) seem very promising for improving the portability of the code. These libraries handle the distribution of the workload between processors directly at the runtime level. However, until now they have mostly been employed for solving linear algebra problems, and we wish to assess their performance on realistic wave propagation simulations (a small task-based sketch is given after this list). This work is carried out in collaboration with the Inria team Hiepacs and Georges Bosilca (University of Tennessee).
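
As an illustration of the first item above (communications restricted to face-sharing elements), the following minimal C/MPI sketch exchanges only the face traces needed for the flux computation with the neighbouring subdomains. The 1D ring partition, trace length and variable names are placeholders chosen for the example, not part of our actual code.

    /* Minimal sketch: exchange only the face traces shared with neighbouring
     * subdomains, as needed by the DG flux computation (hypothetical layout). */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Illustrative 1D ring partition: each subdomain has two neighbours;
         * in 3D the neighbour list comes from the mesh partitioner. */
        int left  = (rank - 1 + size) % size;
        int right = (rank + 1) % size;

        /* One "trace" per shared face: the unknowns needed to evaluate the
         * numerical flux (here a handful of doubles as a stand-in). */
        enum { TRACE_LEN = 8 };
        double send_left[TRACE_LEN], send_right[TRACE_LEN];
        double recv_left[TRACE_LEN], recv_right[TRACE_LEN];
        for (int i = 0; i < TRACE_LEN; ++i) { send_left[i] = rank; send_right[i] = rank; }

        /* Non-blocking exchange with the face-sharing neighbours only:
         * interior elements require no communication at all. */
        MPI_Request req[4];
        MPI_Irecv(recv_left,  TRACE_LEN, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(recv_right, TRACE_LEN, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
        MPI_Isend(send_right, TRACE_LEN, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[2]);
        MPI_Isend(send_left,  TRACE_LEN, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[3]);

        /* ... volume terms on interior elements can be computed here,
         *     overlapping computation and communication ... */

        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
        /* ... fluxes on shared faces are then computed from recv_left / recv_right ... */

        MPI_Finalize();
        return 0;
    }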
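
For the hybrid MPI and OpenMP strategy of the second item, a minimal sketch is given below, assuming one MPI rank per chip (shared RAM) and OpenMP threads over the elements that rank owns. The element counts, degrees of freedom and the kernel body are placeholders.

    /* Minimal sketch of the hybrid strategy: one MPI rank per chip,
     * OpenMP threads over the elements owned by that rank. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Only the master thread calls MPI: MPI_THREAD_FUNNELED is sufficient. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        (void)rank; /* in the real code, the rank selects the elements this chip owns */

        const int n_local_elems = 100000;   /* elements assigned to this chip (placeholder) */
        const int dofs_per_elem = 64;       /* e.g. a high-order basis (placeholder) */
        double *u   = calloc((size_t)n_local_elems * dofs_per_elem, sizeof *u);
        double *rhs = calloc((size_t)n_local_elems * dofs_per_elem, sizeof *rhs);

        /* Volume terms are element-local: spread them over the cores of the chip. */
        #pragma omp parallel for schedule(static)
        for (int e = 0; e < n_local_elems; ++e) {
            for (int k = 0; k < dofs_per_elem; ++k)
                rhs[(size_t)e * dofs_per_elem + k] += 0.5 * u[(size_t)e * dofs_per_elem + k];
        }

        /* Inter-chip face data would be exchanged here with MPI, outside the
         * OpenMP region (funneled through the master thread). */

        free(u); free(rhs);
        MPI_Finalize();
        return 0;
    }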
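
For the runtime-system approach of the last item, the following sketch uses StarPU's task-insertion C API on a toy problem: the data are split into blocks, one task is submitted per block, and the runtime decides where each task executes and moves the data accordingly. The kernel, block sizes and handle names are placeholders and do not reflect the actual wave propagation kernels.

    /* Minimal sketch assuming StarPU's task-insertion API (StarPU 1.1 or later). */
    #include <starpu.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* CPU implementation of one task: update the unknowns of one block of elements. */
    static void update_block_cpu(void *buffers[], void *cl_arg)
    {
        (void)cl_arg;
        double *u  = (double *)STARPU_VECTOR_GET_PTR(buffers[0]);
        unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
        for (unsigned i = 0; i < n; ++i)
            u[i] *= 0.99; /* placeholder computation */
    }

    static struct starpu_codelet update_cl = {
        .where     = STARPU_CPU,
        .cpu_funcs = { update_block_cpu },
        .nbuffers  = 1,
        .modes     = { STARPU_RW },
    };

    int main(void)
    {
        if (starpu_init(NULL) != 0) return 1;

        enum { NBLOCKS = 16, BLOCK = 1024 };
        double *u = calloc(NBLOCKS * BLOCK, sizeof *u);
        starpu_data_handle_t handles[NBLOCKS];

        /* Register each block with the runtime, which then handles data movement. */
        for (int b = 0; b < NBLOCKS; ++b)
            starpu_vector_data_register(&handles[b], 0 /* main RAM */,
                                        (uintptr_t)&u[b * BLOCK], BLOCK, sizeof(double));

        /* Submit one task per block; dependencies are inferred from the access modes. */
        for (int b = 0; b < NBLOCKS; ++b)
            starpu_task_insert(&update_cl, STARPU_RW, handles[b], 0);

        starpu_task_wait_for_all();

        for (int b = 0; b < NBLOCKS; ++b)
            starpu_data_unregister(handles[b]);
        free(u);
        starpu_shutdown();
        return 0;
    }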

We are confident that these optimizations will allow us to perform large-scale computations and inversions of geophysical data, for models and distributed data volumes at a resolution level that was impossible to reach in the past.